
    Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies

    How can we determine highly effective and intuitive gesture sets for interactive systems, tailored to end users’ preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, which are the functions to control for an interactive system. The vast majority of gesture elicitation studies conclude with a consensus gesture set identified following a process of consensus or agreement analysis. However, the information about specific gesture sets determined for specific applications is scattered across a wide landscape of disconnected scientific publications, which makes it challenging for researchers and practitioners to effectively harness this body of knowledge. To address this challenge, we conducted a systematic literature review and examined a corpus of N=267 studies encompassing a total of 187,265 gestures elicited from 6,659 participants for 4,106 referents. To understand similarities in users’ gesture preferences within this extensive dataset, we analyzed a sample of 2,304 gestures extracted from the studies identified in our literature review. Our approach consisted of (i) identifying the context of use represented by end users, devices, platforms, and gesture sensing technology, (ii) categorizing the referents, (iii) classifying the gestures elicited for those referents, and (iv) cataloging the gestures based on their representation and implementation modalities. Drawing from the findings of this review, we propose guidelines for conducting future end-user gesture elicitation studies.
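
    The consensus or agreement analysis mentioned above is typically based on an agreement rate computed per referent. Purely as an illustration (not drawn from the reviewed paper), the following Python sketch computes the widely used agreement rate of Vatavu and Wobbrock, assuming the elicited proposals have already been grouped by identity:

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent (Vatavu & Wobbrock formulation).

    `proposals` is a list of gesture labels, one per participant; identical
    labels mean the participants proposed the same gesture for the referent.
    """
    n = len(proposals)
    if n < 2:
        return 0.0
    groups = Counter(proposals)  # size of each group of identical proposals
    squared_shares = sum((size / n) ** 2 for size in groups.values())
    return n / (n - 1) * squared_shares - 1 / (n - 1)

# Example: 6 participants, 4 propose "swipe-left", 2 propose "point"
print(agreement_rate(["swipe-left"] * 4 + ["point"] * 2))  # ~0.467
```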

    Designing Efficient and Customizable Radar-based Gesture Interfaces

    Radar sensors have many advantages compared to traditional vision-based sensors for gesture recognition. They work in poor lighting and weather conditions, raise fewer privacy concerns than vision-based sensors, and can be integrated into everyday objects. However, due to the size and complexity of radar data, most radar-based systems rely on deep-learning techniques for gesture recognition. These systems take time to train, which makes it challenging to support quick customization by the user, such as changing the gesture set of an application. In this thesis, we want to investigate whether we can create efficient and customizable radar-based gesture interfaces by reducing the size of raw radar data and relying on simple template-matching algorithms for gesture recognition. We have already implemented a pipeline that handles these steps and tested its performance on a dataset of 20 gestures performed by three participants in front of a low-cost, off-the-shelf FMCW radar. The next steps include developing a software environment for testing recognition techniques on radar gestures, optimizing our pipeline for real-time gesture recognition, and investigating two new use cases: environments where the radar is obstructed by materials such as wood, glass, and PVC, and the recognition of breathing patterns.
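
    The abstract does not detail the data reduction step; the sketch below only illustrates the kind of reduction alluded to, cropping each FMCW range profile to the bins around the hand and averaging groups of bins so that a recording becomes a small template. All function names, bin indices, and sizes are hypothetical:

```python
import numpy as np

def reduce_frame(range_profile, hand_bins=slice(10, 50), keep=8):
    """Reduce one radar frame (magnitude range profile) to a few features.

    Hypothetical illustration: keep only the range bins where the hand is
    expected, then average groups of bins down to `keep` coarse values.
    """
    roi = np.abs(range_profile[hand_bins])
    groups = np.array_split(roi, keep)
    return np.array([g.mean() for g in groups])

def reduce_gesture(frames, **kwargs):
    """Turn a (num_frames, num_range_bins) recording into a small feature matrix."""
    return np.stack([reduce_frame(f, **kwargs) for f in frames])

# A 100-frame recording with 128 range bins shrinks to a 100 x 8 template.
recording = np.random.rand(100, 128)
print(reduce_gesture(recording).shape)  # (100, 8)
```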

    Mid-air Gesture Recognition by Ultra-Wide Band Radar Echoes

    Microwave radar sensors in human-computer interaction offer several advantages over wearable and image-based sensors, such as privacy preservation, high reliability regardless of the ambient and lighting conditions, and a larger field of view. However, the raw signals produced by such radars are high-dimensional and very complex to process and interpret for gesture recognition. For these reasons, machine learning techniques have mainly been used for gesture recognition, but they require a significant number of gesture templates for training and a calibration that is specific to each radar. To address these challenges in the context of mid-air gesture interaction, we introduce a data processing pipeline for hand gesture recognition adopting a model-based approach that combines full-wave electromagnetic modeling and inversion. Thanks to this model, gesture recognition is reduced to handling two dimensions: the hand-radar distance and the relative dielectric permittivity, which depends only on the hand (e.g., its size, surface, electric properties, and orientation). We are developing a software environment that accommodates the main stages of our pipeline towards final gesture recognition. We have already tested it on a dataset of 16 gesture classes with 5 templates per class recorded with the Walabot, a lightweight, off-the-shelf array radar. We are now studying whether user-defined radar gestures resulting from gesture elicitation studies can be properly recognized by our gesture recognition engine.
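
    Once each gesture is reduced to two time series (hand-radar distance and apparent permittivity), a simple recognizer becomes feasible. The sketch below uses dynamic time warping with a nearest-template rule as an illustrative choice; it is not the authors' actual recognition engine:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping between two trajectories of shape (T, 2),
    whose columns are (hand-radar distance, apparent permittivity)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(sample, templates):
    """Nearest-template rule; `templates` maps class name -> list of trajectories."""
    return min(
        ((cls, dtw_distance(sample, t)) for cls, ts in templates.items() for t in ts),
        key=lambda pair: pair[1],
    )[0]
```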

    Hand Gesture Recognition for an Off-the-Shelf Radar by Electromagnetic Modeling and Inversion

    Microwave radar sensors in human-computer interaction have several advantages compared to wearable and image-based sensors, such as privacy preservation, high reliability regardless of the ambient and lighting conditions, and a larger field of view. However, the raw signals produced by such radars are high-dimensional and relatively complex to interpret. Advanced data processing, including machine learning techniques, is therefore necessary for gesture recognition. While these approaches can reach high gesture recognition accuracy, artificial neural networks require a significant number of gesture templates for training, and their calibration is radar-specific. To address these challenges, we present a novel data processing pipeline for hand gesture recognition that combines advanced full-wave electromagnetic modelling and inversion with machine learning. In particular, the physical model accounts for the radar source, the radar antennas, the radar-target interactions, and the target itself, i.e., the hand in our case. To make this processing feasible, the hand is emulated by an equivalent infinite planar reflector, for which analytical Green’s functions exist. The apparent dielectric permittivity, which depends on the hand size, electric properties, and orientation, determines the wave reflection amplitude as a function of the distance from the hand to the radar. Through full-wave inversion of the radar data, the physical distance as well as this apparent permittivity are retrieved, thereby reducing the dimension of the radar dataset by several orders of magnitude while keeping the essential information. Finally, the estimated distance and apparent permittivity as a function of gesture time are used to train the machine learning algorithm for gesture recognition. This physically-based dimension reduction enables the use of simple gesture recognition algorithms, such as template-matching recognizers, that can be trained in real time and provide competitive accuracy with only a few samples. We evaluate the main stages of our pipeline on a dataset of 16 gesture classes, with 5 templates per class, recorded with the Walabot, a lightweight, off-the-shelf array radar. We also compare these results with those obtained with an ultra-wideband radar made of a single horn antenna and a lightweight vector network analyzer, and with a Leap Motion Controller.
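
    As a rough, hedged illustration of the model-based inversion idea (the actual pipeline relies on full-wave modelling with analytical Green’s functions, which is considerably more involved), the sketch below fits a toy two-parameter reflector model, distance and apparent relative permittivity, to a measured echo amplitude and delay with a least-squares solver; the model, starting values, and bounds are assumptions for illustration only:

```python
import numpy as np
from scipy.optimize import least_squares

C = 3e8  # speed of light (m/s)

def forward_model(params, amplitude_scale=1.0):
    """Toy echo model for an equivalent planar reflector at distance d
    with apparent relative permittivity eps_r (illustrative, not full-wave)."""
    d, eps_r = params
    gamma = (1.0 - np.sqrt(eps_r)) / (1.0 + np.sqrt(eps_r))  # normal-incidence reflection
    amplitude = amplitude_scale * abs(gamma) / d              # 1/d spreading loss
    delay = 2.0 * d / C                                       # two-way travel time
    return np.array([amplitude, delay])

def invert(measured_amplitude, measured_delay):
    """Retrieve (distance, apparent permittivity) from one echo by least squares."""
    data = np.array([measured_amplitude, measured_delay])

    def residuals(params):
        return (forward_model(params) - data) / data  # relative residuals balance units

    result = least_squares(residuals, x0=[0.5, 20.0], bounds=([0.05, 1.0], [3.0, 80.0]))
    return result.x  # [distance (m), apparent eps_r]
```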

    SnappView, A Software Development Kit for Supporting End-user Mobile Interface Review

    This paper presents SnappView, an open-source software development kit that facilitates end-user review of graphical user interfaces for mobile applications and streamlines their input into a continuous design life cycle. SnappView structures this user interface review process into four cumulative stages: (1) a developer creates a mobile application project with user interface code instrumented by only a few instructions governing SnappView and deploys the resulting application on an application store; (2) any tester, such as an end user, a designer, or a reviewer, while interacting with the instrumented user interface, shakes the mobile device to freeze and capture its screen and to provide insightful multimodal feedback, such as textual comments, critiques, suggestions, drawings made with stroke gestures, or voice and video recordings, together with a level of importance; (3) the screenshot is captured with the application, browser, and status data and sent with the feedback to the SnappView server; and (4) a designer then reviews the collected and aggregated feedback data and passes them to the developer to address the raised usability problems. Another cycle then initiates the next design iteration. This paper presents the motivations and process for performing mobile application review based on SnappView. Based on this process, we deployed "WeTwo", a real-world mobile application for finding various personal activities, on the App Store over a one-month period with 420 active users. This application served for a user experience evaluation conducted with N1=14 developers to reveal the advantages and shortcomings of the toolkit from a development point of view. The same application was also used in a usability evaluation conducted with N2=22 participants to reveal its advantages and shortcomings from an end-user viewpoint.

    Engineering Slidable Graphical User Interfaces with Slime

    Intra-platform plasticity typically assumes that the display of a computing platform remains fixed and rigid during interactions with the platform, in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for reconfiguration, concrete interaction, unit allocation, and widget selection, and implement it in JavaScript. In a first experiment, we determine the display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address reconfiguration rules and widget selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
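
    The mapping rules themselves are not given in the abstract; purely as a hypothetical illustration of what a widget-selection rule table could look like (the actual implementation is in JavaScript and certainly differs), consider the following Python sketch:

```python
# Hypothetical widget-selection rules in the spirit of a model-based mapping:
# each abstract interaction unit attribute is mapped to a concrete widget
# depending on its data type and cardinality. Names are illustrative only.
WIDGET_RULES = [
    (lambda attr: attr["type"] == "bool", "checkbox"),
    (lambda attr: attr["type"] == "enum" and len(attr["values"]) <= 4, "radio-group"),
    (lambda attr: attr["type"] == "enum", "dropdown"),
    (lambda attr: attr["type"] == "date", "date-picker"),
    (lambda attr: True, "text-field"),  # default rule
]

def select_widget(attr):
    """Return the first concrete widget whose rule matches the abstract attribute."""
    return next(widget for rule, widget in WIDGET_RULES if rule(attr))

print(select_widget({"type": "enum", "values": ["left", "center", "right"]}))  # radio-group
```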

    Evaluating a Large Language Model on Searching for GUI Layouts

    The field of generative artificial intelligence has seen significant advancements in recent years with the advent of large language models (LLMs), which have shown impressive results in software engineering tasks but not yet in engineering user interfaces. We therefore raise a specific research question: would an LLM-based system be able to search for relevant GUI layouts? To address this question, we conducted a controlled study evaluating how Instigator, an LLM-based system for searching GUI layouts of web pages through generative pre-training, returns GUI layouts that are relevant to a given instruction, and what the user experience of (N=34) practitioners interacting with Instigator is. Our results identify a very high similarity and a moderate correlation between the rankings of the GUI layouts generated by Instigator and the rankings of the practitioners with respect to their relevance to a given design instruction. We highlight the results obtained through thirteen UEQ+ scales that characterize the user experience of practitioners with Instigator, which we use to discuss perspectives for improving such tools in the future.
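
    The abstract does not name the correlation measure used to compare rankings; assuming a standard rank correlation such as Spearman’s coefficient, a minimal sketch of that comparison could look like this (the ranks below are made up):

```python
from scipy.stats import spearmanr

# Hypothetical ranks given by the system and by practitioners to the same
# five candidate GUI layouts for one design instruction (1 = most relevant).
system_ranks = [1, 2, 3, 4, 5]
practitioner_ranks = [1, 3, 2, 4, 5]

rho, p_value = spearmanr(system_ranks, practitioner_ranks)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```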

    Model-based intelligent user interface adaptation: challenges and future directions

    Adapting the user interface of a software system to the requirements of the context of use continues to be a major challenge, particularly when users become more demanding in terms of adaptation quality. A considerable number of methods have, over the past three decades, provided some form of modelling with which to support user interface adaptation. There is, however, a crucial issue in analysing the concepts, the underlying knowledge, and the user experience afforded by these methods in order to compare their benefits and shortcomings. These methods are so numerous that positioning a new method in the state of the art is challenging. This paper therefore defines a conceptual reference framework for intelligent user interface adaptation containing a set of conceptual adaptation properties that are useful for model-based user interface adaptation. The objective of this set of properties is to understand any method, to compare various methods, and to generate new ideas for adaptation. We also analyse the opportunities that machine learning techniques could provide for data processing and analysis in this context, and identify some open challenges in order to guarantee an appropriate user experience for end users. The relevant literature and our experience in research and industrial collaboration have been used as the basis on which to propose future directions in which these challenges can be addressed.

    QuantumLeap, a Framework for Engineering Gestural User Interfaces based on the Leap Motion Controller

    Despite the tremendous progress made in recognizing gestures acquired by various devices, such as the Leap Motion Controller, developing a gestural user interface based on such devices still requires a significant programming and software engineering effort before obtaining a running interactive application. To facilitate this development, we present QuantumLeap, a framework for engineering gestural user interfaces based on the Leap Motion Controller. Its pipeline software architecture can be parameterized to define a workflow among modules that acquire gestures from the Leap Motion Controller, segment them, recognize them, and manage their mapping to functions of the application. To demonstrate its practical usage, we implement two gesture-based applications: an image viewer that allows healthcare workers to browse DICOM medical images of their patients without the hygiene issues commonly associated with touch user interfaces, and a large-scale application for managing multimedia content on wall screens. To evaluate the usability of QuantumLeap, seven participants took part in an experiment in which they used QuantumLeap to add a gestural interface to an existing application.
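
    QuantumLeap’s actual API is not shown here; the following generic Python sketch only illustrates the kind of parameterizable acquisition, segmentation, recognition, and mapping workflow the abstract describes, with hypothetical component names:

```python
class GesturePipeline:
    """Generic acquisition -> segmentation -> recognition -> mapping pipeline,
    illustrating the kind of workflow the abstract describes (not the real API)."""

    def __init__(self, acquire, segment, recognize, mappings):
        self.acquire = acquire      # yields raw frames (e.g., hand joint positions)
        self.segment = segment      # groups frames into candidate gestures
        self.recognize = recognize  # returns a gesture class name or None
        self.mappings = mappings    # gesture class -> application function

    def run(self):
        for candidate in self.segment(self.acquire()):
            gesture = self.recognize(candidate)
            if gesture in self.mappings:
                self.mappings[gesture]()  # invoke the mapped application function

# Example wiring with placeholder components:
# pipeline = GesturePipeline(acquire=read_frames, segment=split_on_pause,
#                            recognize=dollar_one_recognizer,
#                            mappings={"swipe-left": show_next_image})
# pipeline.run()
```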